N-1-3-030.50.2, Gigabit Networks, by Craig Partridge*, <craig@aland.bbn.com>
As several researchers are fond of pointing out, transmitting and
receiving at gigabit speeds isn't the hard problem facing us. The
really hard problem is achieving gigabit speeds while providing
service guarantees. These service guarantees include features like
bounding the maximum delay or ensuring a video transmission will get
enough bandwidth.
In this column, we'll look briefly at a piece of the problem of
providing service guarantees: traffic shaping. The basic idea behind
traffic shaping is the following. Assume we have a flow (or stream,
or connection, or whatever is your favorite word for a transmission path
between two transport endpoints) which needs the network to guarantee
that the end-to-end delay over the flow will be less than a certain
amount, and the loss rate will not exceed a certain level. To make
those kinds of guarantees, the network needs to know something about
how the flow is going to behave. For example, if the flow is going to
be sending 1 megabit packets at an average of 1 gigabit per second, a
different path may be required than for a flow sending 1 Kbit packets
at 56 Kbit/s. The idea behind traffic shaping is to find ways to
describe how the flow will place packets into the network (size of
packets, average bandwidth, etc.) in such a way that the network both
knows what to expect (and can punish flows which exceed the behavior
promised), and can use the description of the flow to do scheduling
inside the network. (As an aside, if the network does no scheduling,
it can clearly make no promises. Exactly how much scheduling is
required, however, is a hot topic of debate).
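As a rough illustration, such a flow description might be captured in a
small record like the following Python sketch; the field names and
numbers here are invented, not part of any real protocol:

    from dataclasses import dataclass

    @dataclass
    class FlowDescriptor:
        """Hypothetical traffic description a flow hands to the network."""
        max_packet_bits: int   # largest packet the flow will send
        avg_rate_bps: float    # long-term average bandwidth
        burst_bits: int        # largest back-to-back burst allowed

    # A bulk-transfer flow: modest average rate, occasional big bursts.
    bulk = FlowDescriptor(max_packet_bits=12_000,
                          avg_rate_bps=56_000,
                          burst_bits=1_000_000)

The network can schedule against such a description, and police any flow
that exceeds it.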
Probably the most popular way to do traffic shaping is by using a
set of schemes collectively known as "leaky bucket". There are
actually a number of variations on leaky bucket. I'll describe three
versions here.
The first and original version of leaky bucket is the simplest. It
was designed to work with Asynchronous Transfer Mode (ATM). ATM
transmits fixed-size packets called cells. Whenever a flow wants to
send a bunch of cells, it puts the cells into a fixed-size queue,
called a bucket. At some rate, R, the cells are removed, one by one,
from the bottom of the bucket and sent over the network. If the flow
tries to put too many cells into the bucket, the bucket "overflows".
Essentially, what simple leaky bucket does is convert a possibly
bursty stream of cells from the flow into a constant bit-rate
transmission pattern on the network.
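In code, simple leaky bucket comes down to a few lines; this minimal
Python sketch uses invented class and parameter names:

    from collections import deque

    class LeakyBucket:
        """Simple leaky bucket: fixed-size queue drained at a constant rate."""
        def __init__(self, capacity_cells, rate_cells_per_sec):
            self.bucket = deque()
            self.capacity = capacity_cells
            self.rate = rate_cells_per_sec

        def offer(self, cell):
            """Flow adds a cell; returns False if the bucket overflows."""
            if len(self.bucket) >= self.capacity:
                return False                  # overflow: cell is dropped
            self.bucket.append(cell)
            return True

        def drain(self, seconds):
            """Remove up to rate*seconds cells from the bottom and send them."""
            sent = []
            for _ in range(int(self.rate * seconds)):
                if not self.bucket:
                    break
                sent.append(self.bucket.popleft())
            return sent                       # a constant bit-rate pattern

However bursty the arrivals, drain() never releases more than R cells
per second onto the network.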
The problem with simple leaky bucket is that it is too simple. A lot
of data communications traffic has the property that its average
bit rate is pretty low, but there are occasional large bursts of
traffic. Simple leaky bucket will force the flow to describe itself
as sending at the maximum burst rate, rather than the average rate, if
the flow wants to get enough bandwidth to send its bursts quickly.
But since the flow will usually not send at the burst rate, the
network will likely over-allocate resources to the flow. So traffic
shaping according to simple leaky bucket is likely to lead to poor
utilization of the network. For this reason, many researchers
currently prefer another version of leaky bucket, sometimes called
"token bucket".
In a token bucket scheme, instead of holding data, the bucket holds
tokens, where each token represents the right to send a certain amount
of data (usually a byte or a cell). The bucket fills with tokens at
rate R. Whenever the bucket gets full, the extra tokens overflow the
bucket and are discarded.
When a flow wants to send some data (a packet or a cell), it looks in
the token bucket to see if the bucket contains a number of tokens
equal to the amount of data the flow wants to send. If there are
enough tokens, the data is sent, and the tokens are removed from the
bucket. If there are not enough tokens, the flow must wait until
enough tokens have accumulated.
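A companion Python sketch (again with invented names) makes the
difference from simple leaky bucket plain: the bucket stores permission
to send, not the data itself:

    import time

    class TokenBucket:
        """Token bucket: tokens accrue at `rate` per second, capped at `size`."""
        def __init__(self, size_tokens, rate_tokens_per_sec):
            self.size = size_tokens
            self.rate = rate_tokens_per_sec
            self.tokens = size_tokens          # start with a full bucket
            self.last = time.monotonic()

        def try_send(self, amount):
            """Send `amount` bytes or cells if enough tokens have accrued."""
            now = time.monotonic()
            # Tokens beyond the bucket size overflow and are discarded.
            self.tokens = min(self.size,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= amount:
                self.tokens -= amount
                return True
            return False                       # caller waits and retries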
Token bucket allows a flow to be bursty (because if the bucket is
full, the flow can send an amount of data equal to the size of the
without waiting) but a flow's burstiness is limited (after the bucket
has emptied, the flow has to wait for it to start to fill again).
Expressed formally, the token bucket scheme says that if the bucket
size is B tokens, then in any time interval T, the flow can send at
most B + (R*T) tokens' worth of data.
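To take some made-up numbers: with a bucket of B = 1 Mbit and a rate of
R = 10 Mbit/s, over any half-second interval the flow can send at most
1 + (10 * 0.5) = 6 Mbits, however its traffic is bunched.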
There's one small problem with token bucket. What if the bucket size
is really big? Then, in the worst case, a flow could blast a whole
bucket's worth of data onto its network link without waiting. Now
what if that network link is a LAN shared among several hosts?
Perhaps we don't want one flow to be able to hog the link for very
long. One way to solve this problem is to use both a token bucket and a
leaky bucket in sequence. After the token bucket lets the data past,
the data is thrown into the leaky bucket. The leaky bucket rate is
set very high (much higher than the token bucket rate) but still below
the link bandwidth. This way, even if the flow is blasting away, the
leaky bucket ensures there is some bandwidth left for other hosts on
the LAN.
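A toy discrete-time simulation in Python (all numbers invented) shows
the combination at work: the token bucket bounds how much of a burst is
admitted, and the fast leaky bucket spreads that burst out on the link:

    # Token bucket (10 tokens/tick, bucket of 500) feeding a leaky bucket
    # drained at 100 cells/tick -- fast, but below the link speed.
    TOKEN_RATE, BUCKET_SIZE, DRAIN_RATE, QUEUE_SIZE = 10, 500, 100, 600

    tokens, queue, sent = BUCKET_SIZE, 0, []
    arrivals = [600] + [0] * 9          # one big burst, then silence

    for arriving in arrivals:
        tokens = min(BUCKET_SIZE, tokens + TOKEN_RATE)
        # Token bucket: admit only what we hold tokens for (excess cells
        # would have to wait; this toy version simply leaves them behind).
        admitted = min(arriving, tokens, QUEUE_SIZE - queue)
        tokens -= admitted
        queue += admitted
        # Leaky bucket: the link never sees more than DRAIN_RATE per tick.
        out = min(queue, DRAIN_RATE)
        queue -= out
        sent.append(out)

    print(sent)   # [100, 100, 100, 100, 100, 0, ...]: the burst is smoothed

Even though the flow "blasted" 600 cells at once, the link carries at
most 100 cells per tick, so other hosts still get their share.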
That's a quick survey of leaky bucket. By now, I expect you're
beginning to wonder why you needed this much detail on leaky bucket.
The answer is that early this year there was a very nice Ph.D.
dissertation published by Abhay K.J. Parekh at MIT. Parekh proved
that in networks that use Fair Queueing in their gateways, if flows
are described (and constrained) by the token bucket plus leaky bucket
scheme described in the last paragraph, it is possible to make
guarantees about worst-case delay and the minimum bandwidth a flow
will receive. In other words, we now know at least one way to provide
performance guarantees to bursty traffic sources!
Parekh's dissertation, "A Generalized Processor Sharing Approach to
Flow Control in Integrated Services Networks" (1991 Ph.D. -- LIDS
Report 2089), is available from MIT for about $50 USD [call +1 (617)
253-5650 to order a copy].
*Research Scientist, BBN